
    Artificial Intelligence for Data Analysis and Signal Processing

    Artificial intelligence (AI) currently encompasses a huge variety of fields, from broad areas such as logical reasoning and perception to specific tasks such as game playing, language processing, theorem proving, and diagnosing diseases. Systems with human-level (or greater) intelligence would have a profound impact on our everyday lives, and in many ways this impact is already being felt. In this research, AI techniques have been introduced and applied in several clinical and real-world scenarios, with particular focus on deep learning methods. A human gait identification system based on the analysis of inertial signals has been developed, achieving misclassification rates smaller than 0.15%. Advanced deep learning architectures have also been investigated to tackle the problem of atrial fibrillation detection from short and noisy electrocardiographic signals; the results show a clear improvement of representation learning over a knowledge-based approach. Another important clinical challenge, both for the patient and for on-board automatic alarm systems, is to detect, sufficiently in advance, the patterns leading to risky situations, allowing the patient to make therapeutic decisions on the basis of future rather than current information. This problem has been specifically addressed for the prediction of critical hypo/hyperglycemic episodes from continuous glucose monitoring devices, through a comparative analysis of the most successful methods for glucose event prediction. This dissertation also shows the benefits of learning algorithms for vehicular traffic anomaly detection, through a statistical Bayesian framework, and for the optimization of the video streaming user experience, through an intelligent adaptation engine for video streaming clients. The proposed solution explores the promising field of deep learning methods integrated with reinforcement learning schemes, showing its benefits against other state-of-the-art approaches. The strong knowledge-transfer capability of artificial intelligence methods and the benefits of representation learning systems stand out from this research and represent the common thread among all the presented research fields.
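
    As an illustration of the kind of statistical Bayesian framework mentioned above for traffic anomaly detection, the sketch below scores a new per-interval vehicle count by its posterior-predictive tail probability under a conjugate Gamma-Poisson model. The model, the prior parameters and the 0.01 flagging threshold are illustrative assumptions, not the dissertation's actual design.

        import numpy as np
        from scipy.stats import nbinom

        def anomaly_score(history, x_new, alpha=1.0, beta=1.0):
            """Posterior-predictive tail probability of a new traffic count.

            history     : past per-interval vehicle counts (modeled as Poisson)
            x_new       : the new count to score
            alpha, beta : Gamma(alpha, beta) prior on the Poisson rate
            """
            # Gamma-Poisson conjugacy: posterior is Gamma(alpha + sum(x), beta + n)
            r = alpha + np.sum(history)
            p = (beta + len(history)) / (beta + len(history) + 1.0)
            # Posterior predictive is Negative Binomial(r, p); the score is the
            # probability of a count at least as extreme as the one observed
            return min(nbinom.cdf(x_new, r, p), nbinom.sf(x_new - 1, r, p))

        # Hypothetical usage: vehicle counts per 5-minute window on one road segment
        counts = [42, 38, 45, 40, 44, 39, 41, 43]
        print(anomaly_score(counts, 90) < 0.01)   # True  -> flag as anomalous
        print(anomaly_score(counts, 41) < 0.01)   # False -> consistent with history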

    Tissue-specific mtDNA abundance from exome data and its correlation with mitochondrial transcription, mass and respiratory activity.

    Eukaryotic cells contain a population of mitochondria, variable in number and shape, which in turn contain multiple copies of a tiny, compact genome (mtDNA) whose expression and function are strictly coordinated with those of the nuclear genome. mtDNA copy number varies between cell and tissue types, both in response to overall metabolic and bioenergetic demands and as a consequence or cause of specific pathological conditions. Here we present a novel and reliable methodology to assess the effective mtDNA copy number per diploid genome by investigating off-target reads obtained in whole-exome sequencing (WES) experiments. We also investigate whether and how mtDNA copy number correlates with mitochondrial mass, respiratory activity and expression levels. Analyzing six different tissues from three age- and sex-matched human individuals, we found a highly significant linear correlation between mtDNA copy number estimated by qPCR and the frequency of off-target mtDNA WES reads. Furthermore, mtDNA copy number showed highly significant correlations with mitochondrial gene expression levels as measured by RNA-Seq, as well as with mitochondrial mass and respiratory activity. Our methodology thus makes it feasible to investigate, at large scale, mtDNA copy number in diverse cell types, tissues and pathological conditions, or in response to specific treatments. This work was supported by Ministero dell'Istruzione, Università e Ricerca (projects PRIN-2009, Micromap [PON01_02589], Virtualab [PON01_01297]) and by Consiglio Nazionale delle Ricerche (progetto strategico “Medicina personalizzata”, progetto strategico “Invecchiamento”, progetto bandiera “Epigen”).
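
    At its core the estimate is a depth ratio: mtDNA copies per diploid genome = 2 x (mtDNA per-base depth) / (nuclear per-base depth), where the factor 2 accounts for the two nuclear copies. A minimal sketch follows; the mapped read counts would come from, e.g., samtools idxstats on the WES alignment, the reference lengths assume a GRCh38-like human reference, and the example figures are hypothetical.

        MT_LENGTH = 16_569                # human mitochondrial genome (bp)
        NUCLEAR_LENGTH = 3_100_000_000    # approximate haploid nuclear genome (bp)

        def mtdna_copy_number(mt_reads, nuclear_reads,
                              mt_length=MT_LENGTH, nuclear_length=NUCLEAR_LENGTH):
            """mtDNA copies per diploid nuclear genome from mapped read counts.

            Per-base depth is reads/length (the read length cancels in the
            ratio); nuclear depth corresponds to two copies, hence the factor 2.
            """
            mt_depth = mt_reads / mt_length
            nuclear_depth = nuclear_reads / nuclear_length
            return 2.0 * mt_depth / nuclear_depth

        # Hypothetical example: 25,000 off-target chrM reads in a 50M-read exome
        print(round(mtdna_copy_number(25_000, 50_000_000)))  # ~187 copies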

    IDNet: Smartphone-based gait recognition with convolutional neural networks

    Here we present IDNet, a user authentication framework based on smartphone-acquired motion signals. Its goal is to recognize a target user from their way of walking, using the accelerometer and gyroscope (inertial) signals provided by a commercial smartphone worn in the front pocket of the user's trousers. IDNet features several innovations, including: (i) a robust and smartphone-orientation-independent walking cycle extraction block, (ii) a novel feature extractor based on convolutional neural networks, (iii) a one-class support vector machine to classify walking cycles, and the coherent integration of these into (iv) a multi-stage authentication technique. IDNet is the first system that exploits a deep learning approach as a universal feature extractor for gait recognition, and that combines classification results from subsequent walking cycles into a multi-stage decision-making framework. Experimental results show the superiority of our approach against state-of-the-art techniques, leading to misclassification rates (either false negatives or false positives) smaller than 0.15% with fewer than five walking cycles. Design choices are discussed and motivated throughout, assessing their impact on user authentication performance.
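
    A minimal sketch of the CNN-feature-extractor-plus-one-class-classifier idea follows, using PyTorch and scikit-learn. The channel count, cycle length and layer sizes are illustrative assumptions, not the published IDNet architecture, and random tensors stand in for real walking-cycle data.

        import torch
        import torch.nn as nn
        from sklearn.svm import OneClassSVM

        # Each walking cycle is assumed resampled to a fixed length (200 samples)
        # with 6 inertial channels (3-axis accelerometer + 3-axis gyroscope).
        class CycleCNN(nn.Module):
            def __init__(self, n_channels=6, cycle_len=200, n_features=64):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv1d(n_channels, 16, kernel_size=9), nn.ReLU(), nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
                )
                with torch.no_grad():
                    flat = self.conv(torch.zeros(1, n_channels, cycle_len)).numel()
                self.fc = nn.Linear(flat, n_features)

            def forward(self, x):              # x: (batch, channels, cycle_len)
                return self.fc(self.conv(x).flatten(1))  # one feature vector per cycle

        cnn = CycleCNN()                       # trained once, offline, on many subjects
        target_cycles = torch.randn(40, 6, 200)           # stand-in for target-user data
        with torch.no_grad():
            feats = cnn(target_cycles).numpy()

        # The one-class SVM is trained only on the target user's feature vectors
        authenticator = OneClassSVM(nu=0.05, kernel="rbf").fit(feats)
        with torch.no_grad():
            score = authenticator.decision_function(cnn(torch.randn(1, 6, 200)).numpy())
        print(score > 0)                       # positive -> accept as target user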

    Human authentication from ankle motion data using convolutional neural networks

    We present a data acquisition and signal processing framework for the authentication of users from their gait signatures (accelerometer and gyroscope data). An ankle-worn inertial measurement unit (IMU) is used to acquire the raw motion data, which is pre-processed and used to train a number of signal processing tools, including a convolutional neural network (CNN) for feature extraction as well as one-class single- and multi-stage classifiers. The CNN is trained (offline and only once) using a representative set of subjects and is then exploited as a universal feature extractor, i.e., to extract relevant features from the walking patterns of previously unseen subjects. The one-class classifier is instead trained solely on the subject that we intend to authenticate (the target user). Scores from the one-class classifier are finally fed into a multi-stage decision maker, which performs sequential decision testing for improved accuracy, as sketched below. The system operates in an online fashion, delivering excellent results while requiring, in the worst case, fewer than five walking cycles to reliably authenticate the user.
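
    The sequential decision step can be sketched as follows: per-cycle scores from the one-class classifier are accumulated and compared against two thresholds, deciding as soon as the evidence is strong enough and capping the test at five cycles. The threshold values and the cap are illustrative assumptions, not the calibrated parameters of the framework.

        def sequential_authenticate(cycle_scores, accept_thr=2.0, reject_thr=-2.0,
                                    max_cycles=5):
            """Return ('accept'|'reject'|'undecided', cycles used) from per-cycle scores."""
            evidence = 0.0
            for i, s in enumerate(cycle_scores[:max_cycles], start=1):
                evidence += s                  # accumulate evidence across cycles
                if evidence >= accept_thr:
                    return "accept", i         # confident it is the target user
                if evidence <= reject_thr:
                    return "reject", i         # confident it is an impostor
            return "undecided", min(len(cycle_scores), max_cycles)

        print(sequential_authenticate([0.9, 0.8, 0.7]))   # ('accept', 3)
        print(sequential_authenticate([-1.2, -1.1]))      # ('reject', 2)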

    D-DASH: a Deep Q-learning Framework for DASH Video Streaming

    The ever-increasing demand for seamless high-definition video streaming, along with the widespread adoption of the Dynamic Adaptive Streaming over HTTP (DASH) standard, has been a major driver of the large amount of research on bitrate adaptation algorithms. The complexity and variability of the video content and of the mobile wireless channel make this an ideal application for learning approaches. Here we present D-DASH, a framework that combines deep learning and reinforcement learning techniques to optimize the Quality of Experience (QoE) of DASH. Different learning architectures are proposed and assessed, combining feed-forward and recurrent deep neural networks with advanced strategies. The D-DASH designs are thoroughly evaluated against prominent heuristic and learning-based algorithms from the state of the art, considering performance indicators such as image quality across video segments and freezing/rebuffering events. Our numerical results, obtained on real and simulated channel traces, show the superiority of D-DASH in nearly all the considered quality metrics. Besides yielding a considerably higher QoE, the D-DASH framework exhibits faster convergence to its rate-selection strategy than the other learning algorithms considered in the study. This makes it possible to shorten the training phase, making D-DASH a good candidate for client-side runtime learning.
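
    A minimal sketch of the deep Q-learning formulation follows: the client observes a small state (last bitrate index, buffer level, throughput estimate), picks the next segment's bitrate epsilon-greedily from a Q-network, and learns from an additive QoE reward that penalizes quality switches and rebuffering. The state layout, reward weights and network size are illustrative assumptions, not the published D-DASH design.

        import random
        import torch
        import torch.nn as nn

        BITRATES_MBPS = [0.5, 1.0, 2.0, 4.0, 8.0]     # candidate segment bitrates

        def qoe_reward(quality, prev_quality, rebuffer_s,
                       switch_penalty=1.0, rebuffer_penalty=4.0):
            # Additive QoE: reward image quality, penalize quality switches
            # and freezing/rebuffering time
            return (quality - switch_penalty * abs(quality - prev_quality)
                    - rebuffer_penalty * rebuffer_s)

        # State: (last bitrate index, buffer level, estimated throughput);
        # the network outputs one Q-value per candidate bitrate.
        qnet = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                             nn.Linear(64, len(BITRATES_MBPS)))
        opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
        gamma, eps = 0.95, 0.1

        def select_bitrate(state):
            if random.random() < eps:                 # epsilon-greedy exploration
                return random.randrange(len(BITRATES_MBPS))
            with torch.no_grad():
                return int(qnet(torch.tensor(state)).argmax())

        def td_update(state, action, reward, next_state):
            with torch.no_grad():
                target = reward + gamma * qnet(torch.tensor(next_state)).max()
            loss = (qnet(torch.tensor(state))[action] - target) ** 2  # one-step TD error
            opt.zero_grad()
            loss.backward()
            opt.step()

        s = [2, 8.0, 3.5]                             # last index, buffer (s), Mbps
        a = select_bitrate(s)
        td_update(s, a, qoe_reward(2.0, 2.0, 0.0), [a, 7.5, 3.2])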

    Prediction of Adverse Glycemic Events from Continuous Glucose Monitoring Signal

    The most important objective of any diabetes therapy is to maintain the blood glucose concentration within the euglycemic range, avoiding or at least mitigating critical hypo/hyperglycemic episodes. Modern Continuous Glucose Monitoring (CGM) devices promise to give patients increased and timely awareness of glycemic conditions as these get dangerously close to hypo/hyperglycemia. The challenge is to detect, sufficiently in advance, the patterns leading to risky situations, allowing the patient to make therapeutic decisions on the basis of future (predicted) glucose concentration levels. A technically sound performance comparison of the approaches proposed in recent years is still missing, and it is thus unclear which one is to be preferred. The aim of this study is to fill this gap by carrying out a comparative analysis of the most common methods for glucose event prediction. Both regression and classification algorithms have been implemented and analyzed, including static and dynamic training approaches. The dataset consists of 89 CGM time series measured in diabetic subjects over 7 consecutive days. Performance metrics, specifically defined to assess and compare the event prediction capabilities of the methods, have been introduced and analyzed. Our numerical results show that a static training approach exhibits better performance, in particular when regression methods are considered. However, classifiers show some improvement when trained for a specific event category, such as hyperglycemia, achieving performance comparable to that of the regressors, with the advantage of predicting events sooner.
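
    As a concrete example of the regression-style prediction task, the sketch below fits a first-order trend to a short CGM window and classifies the value extrapolated at a chosen prediction horizon. The 70/180 mg/dL thresholds, the 5-minute sampling period and the 30-minute horizon are common conventions assumed here for illustration, not the study's exact protocol; the methods actually compared range well beyond this simple extrapolation.

        import numpy as np

        HYPO_MGDL, HYPER_MGDL = 70.0, 180.0   # conventional euglycemic bounds
        SAMPLE_MIN = 5.0                      # typical CGM sampling period (min)

        def predict_event(cgm_window, horizon_min=30.0):
            """Extrapolate a first-order trend and classify the predicted value."""
            t = np.arange(len(cgm_window)) * SAMPLE_MIN
            slope, intercept = np.polyfit(t, cgm_window, deg=1)
            predicted = intercept + slope * (t[-1] + horizon_min)
            if predicted < HYPO_MGDL:
                return "hypoglycemia", predicted
            if predicted > HYPER_MGDL:
                return "hyperglycemia", predicted
            return "euglycemia", predicted

        # Hypothetical falling trace, about -2 mg/dL per minute
        print(predict_event([150, 140, 130, 120, 110, 100]))  # hypo predicted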

    On the Effectiveness of Deep Representation Learning: the Atrial Fibrillation Case

    The automatic and unsupervised analysis of biomedical time series is of primary importance for diagnostic and preventive medicine, enabling fast and reliable data processing to reveal clinical insights without the need for human intervention. Representation learning (RL) methods perform an automatic extraction of meaningful features that can be used, e.g., for the subsequent classification of the measured data. The goal of this study is to explore and quantify the benefits of RL techniques of varying degrees of complexity, focusing on modern deep learning (DL) architectures. We focus on the automatic classification of atrial fibrillation (AF) events from noisy single-lead electrocardiographic (ECG) signals obtained from wireless sensors. This is an important task, as it allows the detection of sub-clinical AF, which is hard to diagnose with a short in-clinic 12-lead ECG. The effectiveness of the considered architectures is quantified and discussed in terms of classification performance, memory/data efficiency and computational complexity.
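
    For context, a knowledge-based baseline of the kind representation learning is compared against might compute hand-crafted RR-interval irregularity features, since AF manifests as erratic RR intervals. The sketch below assumes R-peak times are already available, and its feature set and example data are illustrative; a deep model would instead learn features directly from the raw ECG.

        import numpy as np

        def rr_features(r_peak_times_s):
            rr = np.diff(r_peak_times_s)               # RR intervals (seconds)
            drr = np.diff(rr)                          # beat-to-beat RR changes
            rmssd = np.sqrt(np.mean(drr ** 2))         # short-term irregularity
            return {
                "mean_rr": rr.mean(),
                "rmssd": rmssd,
                "normalized_rmssd": rmssd / rr.mean(), # heart-rate-corrected
                "pnn50": np.mean(np.abs(drr) > 0.05),  # fraction of |dRR| > 50 ms
            }

        # Hypothetical beat times: a regular rhythm vs. an irregular (AF-like) one
        regular = np.arange(0, 10, 0.8)
        irregular = np.cumsum(np.random.default_rng(0).uniform(0.4, 1.2, 12))
        print(rr_features(regular)["normalized_rmssd"])    # ~0 (regular rhythm)
        print(rr_features(irregular)["normalized_rmssd"])  # clearly larger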